28 research outputs found

    Urban Material Classification Using Spectral and Textural Features Retrieved from Autoencoders

    Classification of urban materials using remote sensing data, in particular hyperspectral data, is common practice. Spectral libraries can be used to train a classifier, since they provide spectral features for selected urban materials. However, urban materials can exhibit similar spectral characteristics due to high inter-class correlation, which can lead to misclassification. Spectral libraries rarely provide imagery of their samples, which precludes classifying urban materials with additional textural information. This paper therefore conducts material classification comparing the benefits of close-range acquired spectral and textural features. The spectral features consist of either the original spectra, a PCA-based encoding, or a compressed representation of the original spectra retrieved using a deep autoencoder. The textural features are generated using a deep denoising convolutional autoencoder. Both feature types are gathered from the recently published spectral library KLUM. Three classifiers are used: the two well-established Random Forest and Support Vector Machine classifiers, and a Histogram-based Gradient Boosting Classification Tree. The achieved overall accuracy was in the range of 70–80%, with a standard deviation of 2–10% across all classification approaches. This indicates that the number of samples is still insufficient for some of the material classes in this classification task. Nonetheless, the classification results indicate that the spectral features are more important for assigning material labels than the textural features.

    A Scheme for the Detection and Tracking of People Tuned for Aerial Image Sequences

    Abstract. This paper addresses the problem of detecting and tracking a large number of individuals in aerial image sequences taken from high altitude. We propose a method that handles the numerous challenges associated with this task and demonstrate its quality on several test sequences. Moreover, this paper contains several contributions that improve object detection and tracking in other domains as well. We show how to build an effective object detector in a flexible way that incorporates the shadow of an object and enhanced features for shape and color. Furthermore, the performance of the detector is boosted by an improved way of collecting background samples for classifier training. Finally, we describe a tracking-by-detection method that can handle frequent misses and a very large number of similar objects.
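The general tracking-by-detection idea behind handling frequent misses can be sketched as follows (a hedged toy version, not the authors' method; the match radius and miss budget are assumptions): tracks are greedily associated with nearby detections and allowed to "coast" for a few frames without one before being dropped.

```python
# Toy tracking-by-detection with miss tolerance (illustrative only).
import math

MATCH_RADIUS = 5.0   # max distance (px) for a detection/track match; assumed
MAX_MISSES = 2       # frames a track may coast without a detection; assumed

def update_tracks(tracks, detections):
    """tracks: list of dicts with 'pos' and 'misses'; detections: list of (x, y)."""
    unmatched = list(detections)
    for tr in tracks:
        # Greedy nearest-neighbour association within the match radius.
        best = min(unmatched, key=lambda d: math.dist(tr["pos"], d), default=None)
        if best is not None and math.dist(tr["pos"], best) <= MATCH_RADIUS:
            tr["pos"], tr["misses"] = best, 0
            unmatched.remove(best)
        else:
            tr["misses"] += 1  # coast through a missed detection
    # Keep surviving tracks, start new ones from unmatched detections.
    tracks = [t for t in tracks if t["misses"] <= MAX_MISSES]
    tracks += [{"pos": d, "misses": 0} for d in unmatched]
    return tracks

tracks = []
for frame in [[(0, 0)], [], [(1, 1)], [(2, 2), (50, 50)]]:  # frame 2 is a miss
    tracks = update_tracks(tracks, frame)
print(len(tracks))  # → 2: the coasting track survived the empty frame, plus one new track
```

With very many similar objects, the greedy matching above would be replaced by a global assignment (e.g. Hungarian algorithm), but the coasting logic stays the same.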

    Collaborative multi-scale 3D city and infrastructure modeling and simulation

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling (BIM) and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructure. Experiences gained during this research and lessons learned are presented, as well as an outlook on future research focusing on BIM and 3D GIS applications for cities of the future.

    Shape distribution features for point cloud analysis – a geometric histogram approach on multiple scales

    Due to ever more efficient and accurate laser scanning technologies, the analysis of 3D point clouds has become an important task in modern photogrammetry and remote sensing. To exploit the full potential of such data for structural analysis and object detection, reliable geometric features are of crucial importance. Since multi-scale approaches have proved very successful for image-based applications, efforts are currently being made to apply similar approaches to 3D point clouds. In this paper we analyse common geometric covariance features, pinpointing some severe limitations regarding their performance on varying scales. Instead, we propose a different feature type based on the shape distributions known from object recognition. These novel features show very reliable performance over a wide scale range, and their classification results outperform covariance features in all tested cases.
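A shape-distribution feature in this spirit can be sketched with the classic D2 descriptor (histogram of distances between random point pairs, after Osada et al.); this is an illustrative variant on synthetic point clouds, not the authors' exact formulation:

```python
# D2 shape distribution: histogram pairwise distances between random points.
import numpy as np

def d2_histogram(points, n_pairs=2000, n_bins=16, seed=0):
    """Return a normalized histogram of distances between random point pairs."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()

rng = np.random.default_rng(1)
plane = rng.uniform(-1, 1, size=(500, 3)) * np.array([1.0, 1.0, 0.01])  # flat patch
ball = rng.normal(size=(500, 3)) / 3.0                                  # volumetric blob
h_plane, h_ball = d2_histogram(plane), d2_histogram(ball)
print(np.abs(h_plane - h_ball).sum())  # planar vs volumetric: clearly different histograms
```

Because the histogram depends only on relative distances, the descriptor is invariant to rotation and translation, and computing it inside neighbourhoods of different radii yields the multi-scale behaviour the paper targets.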

    Detection and Tracking of Vehicles in Low Framerate Aerial Image Sequences

    Traffic monitoring requires mobile and flexible systems that are able to extract densely sampled spatial and temporal traffic data over large areas in near real time. Video-based systems mounted on aerial platforms meet these requirements, however at the expense of a limited field of view. To overcome this limitation of video cameras, we are currently developing a system for the automatic derivation of traffic flow data which is designed for commercial medium-format cameras with a resolution of 25–40 cm and a rather low frame rate of only 1–3 Hz. In addition, the frame rate is not assumed to be constant over time. Novel camera systems such as DLR’s 3K camera image a scene in “bursts”, with each burst consisting of several frames. After a time gap of a few seconds for readout, the next burst starts, and so on. This kind of imaging results in an along-track overlap of 90% (and more) within bursts and less than 50% between bursts. These peculiarities need to be considered in the design of an airborne traffic monitoring system. We tested the system with data from several flight campaigns, for which ground-truth data in the form of car tracks is also available. The evaluation of the results shows the applicability and potential of this approach.
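The key consequence of a non-constant frame rate is that any motion model must work with the actual time gap between frames. A minimal sketch of this idea (assumed numbers, not the authors' implementation): predict each vehicle's position with a constant-velocity model over whatever interval the burst timing dictates.

```python
# Constant-velocity prediction over variable frame intervals (illustrative).
def predict(pos, vel, dt):
    """Propagate a position by a constant-velocity model over time gap dt (s)."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

# Vehicle at (0, 0) moving 10 px/s in x; a 2 Hz burst gives dt = 0.5 s
# in-burst, followed by an assumed 2 s readout gap between bursts.
pos, vel = (0.0, 0.0), (10.0, 0.0)
for dt in (0.5, 0.5, 2.0):  # two in-burst steps, then the inter-burst gap
    pos = predict(pos, vel, dt)
print(pos)  # → (30.0, 0.0)
```

Because the predicted displacement grows with dt, the search radius for matching detections between bursts must grow accordingly, which is exactly why the 90% in-burst overlap is much easier to exploit than the sub-50% overlap between bursts.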

    Self-Localization of a Multi-Fisheye Camera Based Augmented Reality System in Textureless 3d Building Models

    Georeferenced images help planners to compare and document the progress of underground construction sites. As underground positioning cannot rely on GPS/GNSS, we introduce a purely vision-based localization method that makes use of a textureless 3D CAD model of the construction site. In our analysis-by-synthesis approach, depth and normal fisheye images are rendered from presampled positions, and gradient orientations are extracted to build a high-dimensional synthetic feature space. Acquired camera images are then matched to those features using a robust distance metric and fast nearest neighbor search. In this manner, initial poses can be obtained on a laptop in real time using concurrent processing and the graphics processing unit.
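The matching step of such an analysis-by-synthesis pipeline can be sketched as follows (synthetic descriptors and dimensions are assumptions; the paper's features are gradient orientations from rendered fisheye views): descriptors rendered from presampled poses form a database, and a query image's descriptor is matched to its nearest neighbour to get an initial pose hypothesis.

```python
# Nearest-neighbour lookup of a query descriptor in a synthetic feature
# database (illustrative; brute force instead of an accelerated index).
import numpy as np

rng = np.random.default_rng(2)
n_views, dim = 1000, 64
database = rng.normal(size=(n_views, dim))  # descriptors of presampled poses
poses = np.arange(n_views)                  # pose id per rendered view

query = database[123] + 0.05 * rng.normal(size=dim)  # noisy observation of view 123

# Brute-force L2 nearest neighbour; a k-d tree or approximate index would
# replace this at scale, as the paper's real-time setting requires.
d = np.linalg.norm(database - query, axis=1)
best = int(np.argmin(d))
print(poses[best])  # → 123
```

The retrieved pose only initializes the estimate; a subsequent refinement step would align the camera image to the rendered model more precisely.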

    Review on Convolutional Neural Networks (CNN) in vegetation remote sensing

    Identifying and characterizing vascular plants in time and space is required in various disciplines, e.g. in forestry, conservation, and agriculture. Remote sensing has emerged as a key technology revealing both spatial and temporal vegetation patterns. Harnessing the ever-growing streams of remote sensing data for the increasing demands of vegetation assessment and monitoring requires efficient, accurate, and flexible methods for data analysis. In this respect, the use of deep learning methods is trend-setting, enabling high predictive accuracy while learning the relevant data features independently in an end-to-end fashion. Very recently, a series of studies has demonstrated that the deep learning method of Convolutional Neural Networks (CNN) is very effective at representing spatial patterns, enabling the extraction of a wide array of vegetation properties from remote sensing imagery. This review introduces the principles of CNN and distils why they are particularly suitable for vegetation remote sensing. The main part synthesizes current trends and developments, including considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, as well as CNN approaches and architectures. The literature review showed that CNN can be applied to various problems, including the detection of individual plants and the pixel-wise segmentation of vegetation classes, and numerous studies have shown that CNN outperform shallow machine learning methods. Several studies suggest that the ability of CNN to exploit spatial patterns particularly enhances the value of very high spatial resolution data. The modularity of common deep learning frameworks allows high flexibility in adapting architectures, from which especially multi-modal or multi-temporal applications can benefit. The increasing availability of techniques for visualizing the features learned by CNN will not only help to interpret such models but also allow us to learn from them, improving our understanding of remotely sensed signals of vegetation. Although CNN have not been around for long, it seems clear that they will usher in a new era of vegetation remote sensing.
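The core operation that lets CNN exploit spatial patterns can be made concrete with a minimal, framework-free sketch (illustrative only; the edge-detecting kernel below is hand-set, whereas a CNN learns its kernels from data):

```python
# Minimal 2D convolution + ReLU, the building block of a CNN layer.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation followed by a ReLU non-linearity."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU

# A vertical-edge kernel responds where a boundary (e.g. field/forest edge)
# runs north-south in the image.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                        # left half dark, right half bright
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
response = conv2d(image, kernel)
print(response.max())  # → 3.0, strongest at the vertical edge
```

Stacking many such learned kernels, interleaved with pooling, is what gives CNN their sensitivity to the spatial texture of vegetation that pixel-wise shallow methods cannot capture.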